Experiments with Test Case Generation and Runtime Analysis

Authors

  • Cyrille Artho
  • Doron Drusinsky
  • Allen Goldberg
  • Klaus Havelund
  • Michael R. Lowry
  • Corina S. Pasareanu
  • Grigore Rosu
  • Willem Visser
Abstract

Software testing is typically an ad hoc process where human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate this process. The approach combines automated test case generation, based on systematically exploring the program's input domain, with runtime analysis, where execution traces are monitored and verified against temporal logic specifications or analyzed using advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically per input instance rather than statically once and for all. The paper describes experiments with variants of this approach in the context of two examples, a planetary rover controller and a spacecraft fault protection system.
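
As a minimal illustrative sketch (not taken from the paper or its tools), the runtime-analysis side of the approach can be pictured as a monitor that checks an execution trace against a temporal property such as "every dispatched command is eventually followed by a success or failure status". The Java class below implements such a check over a hypothetical event trace; the event names, the trace format, and the class name are assumptions introduced purely for illustration.

// Illustrative sketch only: a monitor for the response property
// "every COMMAND_DISPATCHED event is eventually followed by a
// COMMAND_SUCCEEDED or COMMAND_FAILED event for the same command id".
// Event names and the "KIND:commandId" trace format are hypothetical.
import java.util.*;

public class ResponseMonitor {
    // Commands that have been dispatched but not yet resolved.
    private final Set<String> pending = new HashSet<>();

    // Feed one trace event of the form "KIND:commandId" to the monitor.
    public void onEvent(String event) {
        String[] parts = event.split(":", 2);
        if (parts.length < 2) return; // ignore malformed or unrelated events
        String kind = parts[0], id = parts[1];
        switch (kind) {
            case "COMMAND_DISPATCHED" -> pending.add(id);
            case "COMMAND_SUCCEEDED", "COMMAND_FAILED" -> pending.remove(id);
            default -> { /* other event kinds are irrelevant to this property */ }
        }
    }

    // At the end of the trace, any still-pending command violates the property.
    public Set<String> violationsAtEndOfTrace() {
        return Collections.unmodifiableSet(pending);
    }

    public static void main(String[] args) {
        ResponseMonitor m = new ResponseMonitor();
        // A tiny hand-written trace; in the approach described above, the
        // trace would instead come from executing a generated test case.
        for (String e : List.of("COMMAND_DISPATCHED:drive1",
                                "COMMAND_SUCCEEDED:drive1",
                                "COMMAND_DISPATCHED:arm2")) {
            m.onEvent(e);
        }
        System.out.println("Unresolved commands: " + m.violationsAtEndOfTrace());
    }
}

In the combined framework sketched by the abstract, a test generator would enumerate input sequences, each run would produce a trace, and a monitor of this kind (or one derived automatically per input instance) would judge whether the run satisfied the expected temporal behavior.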

Related resources

Measurement-Based Worst-Case Execution Time Analysis using Automatic Test-Data Generation

Traditional worst-case execution time (WCET) analysis methods based on static program analysis require a precise timing model of a target processor. The construction of such a timing model is expensive and time consuming. In this paper we present a hybrid WCET analysis framework using runtime measurements together with static program analysis. The novel aspect of this framework is that it uses ...

Automatic Detection of Parameter Shielding for Test Case Generation

Parameter shielding refers to the situation in which one test parameter disables others during test execution. The quality of test case generation techniques is limited by the wide prevalence of parameter shielding. It is challenging to automatically identify the conditions that cause parameter shielding. This paper presents a novel approach for exploring the shielding conditions of test parameters. Our...

Automatically Evaluating the Efficiency of Search-Based Test Data Generation for Relational Database Schemas

The characterization of an algorithm’s worst-case time complexity is useful because it succinctly captures how its runtime will grow as the input size becomes arbitrarily large. However, for certain algorithms—such as those performing search-based test data generation—a theoretical analysis to determine worst-case time complexity is difficult to generalize and thus not often reported in the lit...

Combining test case generation and runtime verification

Software testing is typically an ad-hoc process where human testers manually write test inputs and descriptions of expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports results on a framework to further automate this process. The framework consists of combining automated test case generation based on systematic...

Programming Language and Tools for Automated Testing

Software testing is a necessary and integral part of the software quality process. It is estimated that inadequate testing infrastructure costs the US economy between $22.2 and $59.5 billion. We present Sulu, a programming language designed with automated unit testing specifically in mind, as a demonstration of how software testing may be more integrated and automated into the software developme...



Publication year: 2003